Scheduling Data Intensive Workloads through Virtualization on MapReduce based Clouds

Authors

Abstract

Similar Articles

Scheduling Data Intensive Workloads through Virtualization on MapReduce based Clouds

MapReduce has become a popular programming model for running data-intensive applications on the cloud. Completion time goals or deadlines of MapReduce jobs set by users are becoming crucial in existing cloud-based data processing environments like Hadoop. There is a conflict between scheduling MapReduce jobs to meet deadlines and “data locality” (assigning tasks to nodes that contain their input da...
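
To make the locality-versus-deadline tension concrete, the sketch below (not the algorithm proposed in this paper) shows one way a scheduler could prefer node-local tasks while letting jobs with little deadline slack accept a remote, non-local assignment. The task/job fields, the 0.5x remote-read penalty, and the scoring rule are illustrative assumptions.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        job_id: str
        input_nodes: set       # nodes holding a replica of the task's input split
        est_runtime: float     # estimated task runtime in seconds

    @dataclass
    class Job:
        job_id: str
        deadline: float        # absolute deadline (epoch seconds)
        pending: list = field(default_factory=list)

    def pick_task(free_node, jobs, now=None):
        """Pick a task for a free node, trading data locality against deadline slack."""
        now = time.time() if now is None else now
        best, best_score = None, float("inf")
        for job in jobs:
            slack = job.deadline - now
            for task in job.pending:
                local = free_node in task.input_nodes
                # Assume a remote read costs roughly half the task runtime extra.
                penalty = 0.0 if local else 0.5 * task.est_runtime
                # Lower score = more urgent and/or cheaper to run on this node.
                score = (task.est_runtime + penalty) - slack
                if score < best_score:
                    best, best_score = task, score
        if best is not None:
            for job in jobs:
                if job.job_id == best.job_id:
                    job.pending.remove(best)
                    break
        return best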

Practical Size-based Scheduling for MapReduce Workloads

We present the Hadoop Fair Sojourn Protocol (HFSP) scheduler, which implements a size-based scheduling discipline for Hadoop. The benefits of size-based scheduling disciplines are well recognized in a variety of contexts (computer networks, operating systems, etc.), yet their practical implementation for a system such as Hadoop raises a number of important challenges. With HFSP, which is ava...
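
As a rough illustration of what a size-based discipline buys (this is not the HFSP implementation), the snippet below orders jobs by their estimated remaining work and adds a simple aging term so large jobs are not starved; the aging rate and bookkeeping are assumptions made for the example.

    class SizeBasedQueue:
        """Serve the job with the smallest estimated remaining work, with aging."""

        def __init__(self, aging_rate=0.01):
            self.aging_rate = aging_rate
            self.jobs = {}                      # job_id -> [remaining_work, wait_time]

        def submit(self, job_id, estimated_work):
            self.jobs[job_id] = [estimated_work, 0.0]

        def tick(self, dt):
            for entry in self.jobs.values():
                entry[1] += dt                  # accumulate waiting time for aging

        def next_job(self):
            if not self.jobs:
                return None
            # Effective size = remaining work discounted by time spent waiting.
            return min(self.jobs,
                       key=lambda j: self.jobs[j][0] - self.aging_rate * self.jobs[j][1])

        def record_progress(self, job_id, work_done):
            self.jobs[job_id][0] -= work_done
            if self.jobs[job_id][0] <= 0:
                del self.jobs[job_id]           # job finished, drop it from the queue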

Scheduling of data-intensive workloads in a brokered virtualized environment

Providing performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, for which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. With the increased prevalence of brokerage services in cloud platforms,...
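
A small worked example of the claim that data-transfer rates dominate such workloads (the rate table and job sizes are invented for illustration, not taken from the paper): a broker comparing candidate virtualized placements could estimate each one's completion time from the slowest transfer it requires.

    def estimate_completion(transfers, data_gb, rates_gb_per_s, compute_s):
        """Completion time = slowest required data transfer + compute time."""
        transfer_s = max(data_gb / rates_gb_per_s[(src, dst)] for src, dst in transfers)
        return transfer_s + compute_s

    # Hypothetical transfer rates between storage hosts and a candidate VM (GB/s).
    rates = {("storage1", "vm_a"): 1.0, ("storage2", "vm_a"): 0.25}

    placements = {
        "near_data": [("storage1", "vm_a")],
        "far_data":  [("storage2", "vm_a")],
    }
    for name, transfers in placements.items():
        print(name, estimate_completion(transfers, data_gb=40,
                                        rates_gb_per_s=rates, compute_s=120))
    # near_data finishes in ~160 s, far_data in ~280 s, so a broker that can see
    # the transfer rates would choose the near_data placement.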

Based on the MapReduce Model for Data-intensive Computing of Energy Scheduling Algorithm Strategy

In this study, taking energy consumption into account, we improve the MapReduce job scheduling strategy in order to reduce the average response time of interactive jobs scheduled in the network. Jobs are grouped by priority to adjust task scheduling response times, which reduces the impact of network congestion, with good result...
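
A minimal sketch of priority grouping in this spirit (the group names, their order, and the off-peak rule are assumptions for illustration): interactive jobs are dispatched ahead of batch work, and batch groups can be held back during peak, higher-energy periods.

    from collections import defaultdict, deque

    class GroupedScheduler:
        # Assumed priority order; interactive jobs get the fastest response.
        ORDER = ["interactive", "production", "batch"]

        def __init__(self):
            self.groups = defaultdict(deque)

        def submit(self, job_id, group):
            self.groups[group].append(job_id)

        def dispatch(self, off_peak):
            """Return the next job id, deferring batch work during peak hours."""
            for group in self.ORDER:
                if group == "batch" and not off_peak:
                    continue                    # defer batch jobs to off-peak periods
                if self.groups[group]:
                    return self.groups[group].popleft()
            return None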

FLEX: A Slot Allocation Scheduling Optimizer for MapReduce Workloads

Originally, MapReduce implementations such as Hadoop employed First In First Out (FIFO) scheduling, but such simple schemes cause job starvation. The Hadoop Fair Scheduler (HFS) is a slot-based MapReduce scheme designed to ensure a degree of fairness among jobs by guaranteeing each job at least some minimum number of allocated slots. Our prime contribution in this paper is a different, fle...
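
The contrast between a minimum-guarantee fair allocation and a metric-driven one can be sketched as follows (the weighting metric is an illustrative stand-in, not the FLEX optimizer): every job first receives its guaranteed minimum of slots, and the spare slots are then split according to whatever metric the optimizer cares about, e.g. proximity to a deadline.

    def allocate_slots(total_slots, jobs):
        """jobs: job_id -> {"min": guaranteed slots, "weight": optimizer metric}."""
        # Step 1: honour each job's guaranteed minimum (HFS-style fairness floor).
        alloc = {job_id: spec["min"] for job_id, spec in jobs.items()}
        spare = total_slots - sum(alloc.values())
        # Step 2: divide the remaining slots in proportion to the chosen metric.
        total_weight = sum(spec["weight"] for spec in jobs.values()) or 1
        for job_id, spec in jobs.items():
            alloc[job_id] += int(spare * spec["weight"] / total_weight)
        return alloc

    print(allocate_slots(100, {
        "etl":    {"min": 10, "weight": 1.0},
        "report": {"min": 20, "weight": 3.0},   # e.g. the job closest to its deadline
        "adhoc":  {"min": 5,  "weight": 0.5},
    }))
    # -> {'etl': 24, 'report': 63, 'adhoc': 12}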

Journal

Journal title: International Journal of Distributed and Parallel Systems

Year: 2012

ISSN: 2229-3957

DOI: 10.5121/ijdps.2012.3411